641 research outputs found

    Spectral analysis of Markov kernels and application to the convergence rate of discrete random walks

    Full text link
    Let $\{X_n\}_{n\in\mathbb{N}}$ be a Markov chain on a measurable space $\mathsf{X}$ with transition kernel $P$, and let $V:\mathsf{X}\to[1,+\infty)$. The Markov kernel $P$ is here considered as a bounded linear operator on the weighted-supremum space $\mathcal{B}_V$ associated with $V$. Combining quasi-compactness arguments with a precise analysis of the eigen-elements of $P$ then allows us to estimate the geometric rate of convergence $\rho_V(P)$ of $\{X_n\}_{n\in\mathbb{N}}$ to its invariant probability measure, in operator norm on $\mathcal{B}_V$. A general procedure to compute $\rho_V(P)$ for discrete Markov random walks with identically distributed bounded increments is specified.
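The paper estimates the geometric rate $\rho_V(P)$ analytically; as a rough numerical counterpart (our own sketch, not the paper's method), one can watch the total-variation distance to stationarity decay like $C\rho^n$ on a small truncated random walk. The reflecting walk and all constants below are illustrative choices:

```python
# Crude numerical illustration: estimate the geometric convergence rate of a
# small discrete random walk by watching the total-variation distance to the
# stationary law decay as C * rho^n.  Successive ratios approximate rho.

def step(dist, P):
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def stationary(P, iters=5000):
    d = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        d = step(d, P)
    return d

def tv(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Reflecting random walk on {0,...,4} with bounded increments -1, 0, +1.
P = [[0.5, 0.5, 0.0, 0.0, 0.0],
     [0.25, 0.5, 0.25, 0.0, 0.0],
     [0.0, 0.25, 0.5, 0.25, 0.0],
     [0.0, 0.0, 0.25, 0.5, 0.25],
     [0.0, 0.0, 0.0, 0.5, 0.5]]

pi = stationary(P)
d = [1.0, 0.0, 0.0, 0.0, 0.0]   # start at state 0
dists = []
for _ in range(60):
    d = step(d, P)
    dists.append(tv(d, pi))

# The ratio stabilises near the rate rho (second-largest eigenvalue modulus,
# here (1 + cos(pi/4)) / 2, about 0.854).
rho_est = dists[-1] / dists[-2]
print(round(rho_est, 3))
```

For this lazy birth-death chain the rate can be checked against the known spectrum of the underlying path-graph walk, which is what makes it a convenient sanity check.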

    Additional material on bounds of $\ell^2$-spectral gap for discrete Markov chains with band transition matrices

    Full text link
    We analyse the $\ell^2(\pi)$-convergence rate of irreducible and aperiodic Markov chains with $N$-band transition probability matrix $P$ and with invariant distribution $\pi$. This analysis is heavily based on: first, the study of the essential spectral radius $r_{\mathrm{ess}}(P_{|\ell^2(\pi)})$ of $P_{|\ell^2(\pi)}$ derived from Hennion's quasi-compactness criteria; second, the connection between the spectral gap property (SG$_2$) of $P$ on $\ell^2(\pi)$ and the $V$-geometric ergodicity of $P$. Specifically, (SG$_2$) is shown to hold under the condition $\alpha_0 := \sum_{m=-N}^{N} \limsup_{i\to+\infty} \sqrt{P(i,i+m)\,P^*(i+m,i)} < 1$. Moreover $r_{\mathrm{ess}}(P_{|\ell^2(\pi)}) \leq \alpha_0$. Simple conditions on asymptotic properties of $P$ and of its invariant probability distribution $\pi$ ensuring that $\alpha_0 < 1$ are given. In particular, this allows us to obtain estimates of the $\ell^2(\pi)$-geometric convergence rate of random walks with bounded increments. The specific case of reversible $P$ is also addressed. Numerical bounds on the convergence rate can be provided via a truncation procedure. This is illustrated on the Metropolis-Hastings algorithm.
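The bound $\alpha_0$ is concrete enough to compute directly. The sketch below (our own construction, not the paper's code) evaluates it for a reversible birth-death chain, where reversibility gives $P^* = P$; the $\limsup$ is approximated by a supremum over a large truncation window:

```python
# Compute alpha_0 = sum_m limsup_i sqrt(P(i, i+m) * P*(i+m, i)) for a
# reversible 1-band (tridiagonal) chain; reversibility makes P* = P.
# The limsup over i is approximated by a sup over a large window of states.
import math

def p(i, j):
    # Assumed example: birth-death chain on {0, 1, 2, ...} that drifts down
    # (probability 0.6) more than up (0.4), hence positive recurrent.
    if i == 0:
        return {0: 0.6, 1: 0.4}.get(j, 0.0)
    if j == i - 1:
        return 0.6
    if j == i + 1:
        return 0.4
    return 0.0

N = 1                        # band half-width
window = range(100, 1000)    # finite proxy for i -> infinity
alpha0 = sum(
    max(math.sqrt(p(i, i + m) * p(i + m, i)) for i in window)
    for m in range(-N, N + 1)
)
print(round(alpha0, 4))  # 2 * sqrt(0.6 * 0.4), about 0.9798 < 1
```

Since $\alpha_0 < 1$, the condition for (SG$_2$) holds for this toy chain, and $\alpha_0$ also bounds the essential spectral radius from above.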

    Computable bounds of $\ell^2$-spectral gap for discrete Markov chains with band transition matrices

    Full text link
    We analyse the $\ell^2(\pi)$-convergence rate of irreducible and aperiodic Markov chains with $N$-band transition probability matrix $P$ and with invariant distribution $\pi$. This analysis is heavily based on: first, the study of the essential spectral radius $r_{\mathrm{ess}}(P_{|\ell^2(\pi)})$ of $P_{|\ell^2(\pi)}$ derived from Hennion's quasi-compactness criteria; second, the connection between the spectral gap property (SG$_2$) of $P$ on $\ell^2(\pi)$ and the $V$-geometric ergodicity of $P$. Specifically, (SG$_2$) is shown to hold under the condition $\alpha_0 := \sum_{m=-N}^{N} \limsup_{i\to+\infty} \sqrt{P(i,i+m)\,P^*(i+m,i)} < 1$. Moreover $r_{\mathrm{ess}}(P_{|\ell^2(\pi)}) \leq \alpha_0$. Effective bounds on the convergence rate can be provided via a truncation procedure. Comment: in Journal of Applied Probability, Applied Probability Trust, 2016. arXiv admin note: substantial text overlap with arXiv:1503.0220

    A uniform Berry--Esseen theorem on $M$-estimators for geometrically ergodic Markov chains

    Full text link
    Let $\{X_n\}_{n\ge 0}$ be a $V$-geometrically ergodic Markov chain. Given some real-valued functional $F$, define $M_n(\alpha) := n^{-1}\sum_{k=1}^{n} F(\alpha, X_{k-1}, X_k)$, $\alpha\in\mathcal{A}\subset\mathbb{R}$. Consider an $M$-estimator $\hat{\alpha}_n$, that is, a measurable function of the observations satisfying $M_n(\hat{\alpha}_n) \leq \min_{\alpha\in\mathcal{A}} M_n(\alpha) + c_n$, with $\{c_n\}_{n\geq 1}$ some sequence of real numbers going to zero. Under some standard regularity and moment assumptions, close to those of the i.i.d. case, the estimator $\hat{\alpha}_n$ satisfies a Berry--Esseen theorem uniformly with respect to the underlying probability distribution of the Markov chain. Comment: Published at http://dx.doi.org/10.3150/10-BEJ347 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
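To make the $M$-estimation setup concrete, here is a toy instance of our own construction (not taken from the paper): the squared-error contrast $F(a, x, y) = (y - ax)^2$ applied to a simulated AR(1) chain, which is $V$-geometrically ergodic when the true coefficient satisfies $|a_0| < 1$:

```python
# M-estimation sketch: minimise M_n(a) = n^{-1} sum_k F(a, X_{k-1}, X_k)
# over a grid of candidate parameters (the set A), for an AR(1) chain
# X_k = a0 * X_{k-1} + eps_k with standard Gaussian noise.
import random

random.seed(0)
a0 = 0.5
x, xs = 0.0, [0.0]
for _ in range(5000):
    x = a0 * x + random.gauss(0.0, 1.0)
    xs.append(x)

def M_n(a):
    n = len(xs) - 1
    return sum((xs[k] - a * xs[k - 1]) ** 2 for k in range(1, n + 1)) / n

grid = [i / 100 for i in range(-99, 100)]   # candidate set A
a_hat = min(grid, key=M_n)                  # c_n here is the grid resolution
print(a_hat)
```

With a few thousand observations the minimiser lands close to $a_0 = 0.5$; the grid spacing plays the role of the slack sequence $c_n$.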

    Regular perturbation of $V$-geometrically ergodic Markov chains

    Full text link
    In this paper, new conditions for the stability of $V$-geometrically ergodic Markov chains are introduced. The results are based on an extension of the standard perturbation theory formulated by Keller and Liverani. Continuity and higher-order regularity properties are investigated. As an illustration, an asymptotic expansion of the invariant probability measure of an autoregressive model with i.i.d. noise (with a non-standard probability density function) is obtained.
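The kind of stability at stake can be checked numerically on a small example (our own sketch, unrelated to the paper's autoregressive illustration): perturbing a stochastic matrix by $\varepsilon\Delta$ moves the invariant measure by $O(\varepsilon)$:

```python
# First-order stability of the invariant measure: for P_eps = P + eps*Delta
# (rows of Delta sum to 0, so P_eps stays stochastic for small eps),
# ||pi_eps - pi||_1 shrinks roughly linearly in eps.

def stationary(P, iters=2000):
    d = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        d = [sum(d[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return d

P = [[0.7, 0.2, 0.1],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]
Delta = [[-0.1, 0.1, 0.0],      # each row sums to zero
         [0.0, -0.1, 0.1],
         [0.1, 0.0, -0.1]]

pi = stationary(P)
errs = []
for eps in (0.1, 0.05, 0.025):
    P_eps = [[P[i][j] + eps * Delta[i][j] for j in range(3)] for i in range(3)]
    pi_eps = stationary(P_eps)
    errs.append(sum(abs(a - b) for a, b in zip(pi_eps, pi)))
print([round(e, 4) for e in errs])
```

Halving $\varepsilon$ roughly halves the error, which is the numerical signature of the first-order term in an asymptotic expansion of the invariant measure.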

    On the asymptotic analysis of Littlewood's reliability model for modular software

    No full text
    We consider a Markovian model, proposed by Littlewood, to assess the reliability of modular software. Specifically, we are interested in the asymptotic properties of the corresponding failure point process. We focus on its time-stationary version and on its behavior when reliability growth takes place. We prove the convergence in distribution of the failure point process to a Poisson process. Additionally, we provide a convergence rate using the distance in variation. This is heavily based on a similar result of Kabanov, Liptser and Shiryayev for a doubly-stochastic Poisson process whose intensity is governed by a Markov process.

    Linear Dynamics for the state vector of Markov chain functions

    No full text
    Let $(\varphi(X_n))_n$ be a function of a finite-state Markov chain $(X_n)_n$. In this note, we investigate under which conditions the random variables $\varphi(X_n)$ have the same distribution as $Y_n$ (for every $n$), where $(Y_n)_n$ is a Markov chain with a fixed transition probability matrix. In other words, for a deterministic function $\varphi$, we investigate the conditions under which $(X_n)_n$ is \textit{weakly lumpable for the state vector}. We show that the set of all probability distributions of $X_0$ such that $(X_n)_n$ is weakly lumpable for the state vector can be finitely generated. The connections between our definition of lumpability and the usual ones, such as the proportional dynamics property, are discussed.
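Weak lumpability, as studied above, depends on the initial law and is delicate to test. By contrast, the classical Kemeny-Snell condition for *strong* lumpability (a sufficient special case, offered here only for orientation) is a simple row-sum check:

```python
# Kemeny-Snell check: phi(X_n) is Markov for EVERY initial law iff the
# aggregated row sums P(i, B) are constant over i within each block of the
# partition.  This is strong lumpability, a special case of the weak notion.

def strongly_lumpable(P, blocks, tol=1e-12):
    for B in blocks:
        for C in blocks:
            sums = [sum(P[i][j] for j in C) for i in B]
            if max(sums) - min(sums) > tol:
                return False
    return True

# 4-state chain, aggregated into blocks {0,1} and {2,3}: every row of a
# block gives the same total mass to each block, so the check passes.
P = [[0.1, 0.3, 0.4, 0.2],
     [0.2, 0.2, 0.5, 0.1],
     [0.3, 0.3, 0.2, 0.2],
     [0.5, 0.1, 0.3, 0.1]]
blocks = [[0, 1], [2, 3]]
print(strongly_lumpable(P, blocks))  # -> True
```

When this check fails, the aggregated process may still be Markov for *some* initial distributions, which is exactly the weakly lumpable set the note shows to be finitely generated.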

    Towards a filter-based EM-algorithm for parameter estimation of Littlewood's software reliability model

    No full text
    In this paper, we deal with a continuous-time software reliability model designed by Littlewood. This model may be thought of as a partially observed Markov process. The EM-algorithm is a standard way to estimate the parameters of processes with missing data. The E-step requires the computation of basic statistics related to the observed/hidden processes. We provide finite-dimensional non-linear filters for these statistics using the innovations method. This allows us to plan the use of the filter-based EM-algorithm developed by Elliott.
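To give a feel for the filtering step, here is a discrete-time toy analogue (our sketch; Littlewood's model is in continuous time and the paper's filters are more elaborate): a normalised forward filter for a hidden Markov model, the kind of finite-dimensional recursion on which E-step statistics are built:

```python
# Normalised forward filter for a two-regime hidden Markov model: given
# observed symbols, compute the filtered laws P(X_k = . | Y_1..Y_k) by a
# predict/correct recursion.  All matrices below are assumed toy values.

def forward_filter(A, B, init, obs):
    """A: hidden transition matrix, B[state][symbol]: emission probabilities,
    init: initial law, obs: observed symbol sequence."""
    f = list(init)
    out = []
    for y in obs:
        # predict through the hidden dynamics, then correct by the likelihood
        pred = [sum(f[i] * A[i][j] for i in range(len(A))) for j in range(len(A))]
        upd = [pred[j] * B[j][y] for j in range(len(A))]
        z = sum(upd)
        f = [u / z for u in upd]
        out.append(f)
    return out

A = [[0.9, 0.1], [0.2, 0.8]]    # two hidden "failure intensity" regimes
B = [[0.95, 0.05], [0.4, 0.6]]  # symbol 1 plays the role of "failure observed"
filt = forward_filter(A, B, [0.5, 0.5], [0, 0, 1, 1])
print([round(p, 3) for p in filt[-1]])
```

After two observed failures the filter shifts its mass onto the high-intensity regime, which is the information the E-step consumes.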

    Strong convergence of a class of non-homogeneous Markov arrival processes to a Poisson process

    No full text
    In this paper, we are concerned with a time-inhomogeneous version of the Markovian arrival process. Under the assumption that the environment process is asymptotically time-homogeneous, we discuss a Poisson approximation of the counting process of arrivals when arrivals are rare. We provide a rate of convergence for the distance in variation. A Poisson-type approximation for the process resulting from a special marking procedure of the arrivals is also outlined.
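The rare-arrival Poisson limit can be illustrated in discrete time (our own sketch, not the paper's continuous-time setting): scale the arrival probabilities of a Markov-modulated Bernoulli process by $c$, stretch the horizon by $1/c$, and the total-count law approaches a Poisson law in total variation:

```python
# Discrete-time illustration of the rare-arrival Poisson limit for a
# Markov-modulated Bernoulli process.  count_law computes the exact law of
# the total number of arrivals by dynamic programming over (state, count).
import math

A = [[0.9, 0.1], [0.3, 0.7]]   # environment chain (assumed toy values)
base = [0.02, 0.08]            # per-step arrival probability in each state

def count_law(c, steps):
    p = [base[0] * c, base[1] * c]
    dp = [{0: 0.5}, {0: 0.5}]  # dp[state][k] = P(env = state, k arrivals)
    for _ in range(steps):
        new = [{}, {}]
        for s in (0, 1):
            for k, w in dp[s].items():
                for t in (0, 1):
                    move = w * A[s][t]
                    new[t][k] = new[t].get(k, 0.0) + move * (1 - p[s])
                    new[t][k + 1] = new[t].get(k + 1, 0.0) + move * p[s]
        dp = new
    law = {}
    for s in (0, 1):
        for k, w in dp[s].items():
            law[k] = law.get(k, 0.0) + w
    return law

def tv_to_poisson(law):
    mean = sum(k * w for k, w in law.items())
    def pois(k):  # log-space pmf, safe for large k
        return math.exp(-mean + k * math.log(mean) - math.lgamma(k + 1))
    return 0.5 * sum(abs(w - pois(k)) for k, w in law.items())

# Same mean number of arrivals, but rarer per step: the distance shrinks.
d1 = tv_to_poisson(count_law(1.0, 200))
d2 = tv_to_poisson(count_law(0.25, 800))
print(round(d1, 4), round(d2, 4))
```

Shrinking $c$ while stretching the horizon keeps the mean fixed but makes each step's arrival rarer, so both the Bernoulli discretisation error and the environment-modulation effect fade, in line with the variation-distance rate the paper quantifies.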

    A new description of the aggregation possibilities of a Markov chain

    No full text
    We consider a (homogeneous) Markov chain with finite state space E. One is often led to introduce a so-called aggregated chain, whose state space consists of aggregates of elements of E. Unfortunately, this chain does not, in general, retain the (homogeneous) Markov property. Recent studies have addressed the computation of the set of initial distributions for which the aggregated chain is Markovian. In this work, we show that if the transition matrix P of the initial model is irreducible, then every initial distribution preserving the Markov property differs from the invariant distribution of P by a vector of a so-called trajectorial observability space. In particular, this makes it possible to sharpen a recent result of Peng.